Results 1 - 12 of 12
1.
International Journal of Image, Graphics and Signal Processing ; 13(4):13, 2022.
Article in English | ProQuest Central | ID: covidwho-2293134

ABSTRACT

To prevent medical data leakage to third parties, algorithm developers have enhanced and modified existing models and tightened cloud security through complex processes. This research uses the Playfair cipher and the K-Means clustering algorithm as a double-level encryption/decryption technique, together with Arnold cat maps, to secure medical images in the cloud. K-Means is used to segment images into pixel clusters, auto-encoders are used to remove noise (de-noising), and a Random Forest regressor, a tree-based ensemble model, is used for classification. The study obtained CT-scan images as datasets from Kaggle and classifies the images into 'Non-Covid' and 'Covid' categories. The software was implemented in Python using Jupyter Notebook, and the PSNR and MSE evaluation metrics were likewise computed in Python. Across the training and testing datasets, a low MSE score ('0') and a high PSNR score (60%) were obtained, indicating that the developed encryption/decryption model is a good fit that enhances cloud security and preserves digital medical images.
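As an illustration of the MSE and PSNR metrics this abstract reports, the following is a minimal sketch (not the paper's code; the image values are hypothetical 8-bit data):

```python
# Minimal sketch: MSE and PSNR between an original image and its
# decrypted reconstruction, for 8-bit pixel data.
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between two images of the same shape."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; unbounded when the images are identical."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / err)

img = np.arange(64, dtype=np.uint8).reshape(8, 8)
assert mse(img, img) == 0.0            # perfect reconstruction -> MSE of 0
assert psnr(img, img) == float("inf")  # and an unbounded PSNR
```

A lossless decryption pipeline would yield exactly this MSE-of-zero case, which matches the "good fit" criterion the abstract describes.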

2.
Information ; 14(3):192, 2023.
Article in English | ProQuest Central | ID: covidwho-2275231

ABSTRACT

Biometric technology is fast gaining pace as a veritable developmental tool. So far, biometric procedures have been used predominantly to verify identity, and ear recognition techniques continue to provide very robust research prospects. This paper identifies and reviews present techniques for ear biometrics according to certain parameters, such as machine learning methods and procedures, and provides directions for future research. Ten databases were accessed, including ACM, Wiley, IEEE, Springer, Emerald, Elsevier, Sage, MIT, Taylor & Francis, and Science Direct, and 1121 publications were retrieved. To obtain relevant materials, some articles were excluded using certain criteria, such as abstract eligibility, duplicity, and uncertainty (indeterminate method). As a result, 73 papers were selected for in-depth assessment of their significance. A quantitative analysis was carried out on the identified works using the following search strategies: source, technique, datasets, status, and architecture. A Quantitative Analysis (QA) of feature extraction methods was carried out on the selected studies, with the geometric approach indicating the highest value at 36%, followed by the local method at 27%. Several architectures, such as the Convolutional Neural Network, restricted Boltzmann machine, auto-encoder, deep belief network, and other unspecified architectures, accounted for 38%, 28%, 21%, 5%, and 4%, respectively. Essentially, this survey also reports the status of existing methods used in classifying related studies. A taxonomy of the current methodologies of ear recognition systems is presented, along with a publicly available occlusion- and pose-sensitive black ear image dataset of 970 images. The study concludes with the need for researchers to consider improvements in the speed and security of available feature extraction algorithms.

3.
World Wide Web ; 26(2):713-732, 2023.
Article in English | ProQuest Central | ID: covidwho-2284437

ABSTRACT

In modern days, making recommendations for news articles poses a great challenge due to the vast amount of online information. Providing personalized recommendations from news articles, which are sources of condensed textual information, is not a trivial task. A recommendation system needs to understand both the textual information of a news article and the user's context, in terms of long-term and temporary preferences, via the user's historic records. Unfortunately, many existing methods do not possess the capability to meet such a need. In this work, we propose a deep neural news recommendation model called CupMar that not only learns the user-profile representation in different contexts, but also leverages the multi-aspect properties of a news article to provide accurate, personalized news recommendations to users. The main components of our CupMar approach are the News Encoder and the User-Profile Encoder. Specifically, the News Encoder uses multiple properties, such as news category, knowledge entity, title, and body content, with advanced neural network layers to derive an informative news representation, while the User-Profile Encoder looks through a user's browsed news, infers both her long-term and recent preference contexts to encode a user representation, and finds the most relevant candidate news for her. We evaluate our CupMar model with extensive experiments on the popular Microsoft News Dataset (MIND) and demonstrate the strong performance of our approach.
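The final ranking step of encoder-based recommenders of this kind typically scores each candidate by matching the encoded user profile against the encoded news representations. A minimal sketch (not CupMar itself; the vectors are hypothetical encoder outputs):

```python
# Minimal sketch: rank candidate news by the dot product between a
# user-profile vector and per-article representation vectors.
import numpy as np

user_profile = np.array([0.2, 0.9, 0.1])   # hypothetical encoded user profile
news_reprs = np.array([
    [0.1, 0.8, 0.0],   # candidate 0: close to this user's interests
    [0.9, 0.0, 0.2],   # candidate 1: off-topic for this user
])

scores = news_reprs @ user_profile          # one relevance score per candidate
best = int(np.argmax(scores))               # index of the recommended article
assert best == 0
```

In a full system, both vectors would come from trained encoders (news content on one side, browsing history on the other); only the matching step is shown here.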

4.
IEEE Transactions on Intelligent Transportation Systems ; 24(2):1773-1785, 2023.
Article in English | ProQuest Central | ID: covidwho-2237283

ABSTRACT

Intelligent maritime transportation is one of the most promising enabling technologies for promoting trade efficiency and reducing physical labor. Trajectory prediction methods are the foundation for guaranteeing collision avoidance and route optimization in ship transportation. This article proposes a bidirectional data-driven trajectory prediction method based on Automatic Identification System (AIS) spatio-temporal data to improve the accuracy of ship trajectory prediction and reduce the risk of accidents. Our study constructs an encoder-decoder network driven by forward and reverse comprehensive historical trajectories and then fuses the characteristics of the sub-networks to predict the ship trajectory. AIS historical trajectory data of US West Coast ships are employed to investigate the feasibility of the proposed method. Compared with current methods, the proposed approach lessens the prediction error by studying the comprehensive historical trajectory, reducing the average prediction error by 60.28%. Ocean and port trajectory data from maritime transportation before and after COVID-19 are also analyzed. The prediction error in the port area is reduced by 95.17% compared with the pre-epidemic data. Our work supports the prediction of maritime ship trajectories, provides valuable services for maritime safety, and offers detailed insights for the analysis of trade conditions in different sea areas before and after the epidemic.
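The error figures in this abstract are relative reductions of an average prediction error. A minimal sketch of how such numbers are computed (not the paper's code; the trajectory points and error values are hypothetical):

```python
# Minimal sketch: average trajectory prediction error as the mean
# Euclidean distance per point, and the relative error reduction
# used to compare two prediction methods.
import math

def avg_error(predicted, actual):
    """Mean Euclidean distance between corresponding trajectory points."""
    assert len(predicted) == len(actual)
    return sum(math.dist(p, a) for p, a in zip(predicted, actual)) / len(actual)

def error_reduction(baseline_err, new_err):
    """Relative reduction of the average prediction error, in percent."""
    return 100.0 * (baseline_err - new_err) / baseline_err

truth = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]   # hypothetical true positions
pred = [(0.0, 0.1), (1.0, 1.1), (2.0, 2.1)]    # hypothetical predicted positions
assert abs(avg_error(pred, truth) - 0.1) < 1e-9
```

For example, a method whose average error drops from a baseline of 1.0 to 0.3972 shows the 60.28% reduction the abstract cites.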

5.
International Journal of Electrical and Computer Engineering ; 13(1):957-971, 2023.
Article in English | ProQuest Central | ID: covidwho-2234587

ABSTRACT

Even though coronavirus disease 2019 (COVID-19) vaccination has been carried out, preparedness for the possibility of the next outbreak wave is still needed, given new mutations and virus variants. A near real-time surveillance system is required to enable stakeholders, especially the public, to respond in a timely manner. Due to its hierarchical structure, epidemic reporting is usually slow, particularly when passing jurisdictional borders. This condition can lead to time gaps in public awareness of new and emerging infectious disease events. Online news is a potential source for COVID-19 monitoring because it reports almost every infectious disease incident globally. However, the news does not report only COVID-19 events but also various information related to COVID-19 topics, such as the economic impact, health tips, and others. We developed a framework for online news monitoring and applied sentence classification to news titles using deep learning to distinguish between COVID-19 event and non-event news. The classification results showed that the fine-tuned bidirectional encoder representations from transformers (BERT) model trained on Bahasa Indonesia achieved the highest performance (accuracy: 95.16%, precision: 94.71%, recall: 94.32%, F1-score: 94.51%). Interestingly, our framework was able to identify news reporting the new COVID strain from the United Kingdom (UK) as event news 13 days before Indonesian officials closed the border to foreigners.
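The precision, recall, and F1-score reported for the binary event/non-event classifier derive directly from confusion-matrix counts. A minimal sketch (not the paper's pipeline; the counts are hypothetical):

```python
# Minimal sketch: precision, recall, and F1-score for a binary
# classifier, computed from raw confusion-matrix counts.
def precision_recall_f1(tp: int, fp: int, fn: int):
    """Return (precision, recall, F1) given true/false positive and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: 90 true positives, 10 false positives, 10 false negatives.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
assert p == 0.9 and r == 0.9
assert abs(f1 - 0.9) < 1e-12   # F1 is the harmonic mean of precision and recall
```

The abstract's near-equal precision (94.71%) and recall (94.32%) explain why its F1-score (94.51%) sits between the two.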

6.
Applied Sciences ; 12(17):8398, 2022.
Article in English | ProQuest Central | ID: covidwho-2023104

ABSTRACT

Fake news detection techniques are a topic of interest due to the vast abundance of fake news data accessible via social media. Present fake news detection systems perform satisfactorily on well-balanced data. However, when the dataset is biased, these models perform poorly. Additionally, manual labeling of fake news data is time-consuming, even though abundant fake news traverses the internet. We therefore introduce a text augmentation technique with a Bidirectional Encoder Representations from Transformers (BERT) language model to generate an augmented dataset composed of synthetic fake data. The proposed approach overcomes the minority-class issue and performs the classification with the AugFake-BERT model (trained with the augmented dataset). The proposed strategy is evaluated against twelve different state-of-the-art models. The proposed model outperforms the existing models with an accuracy of 92.45%. Moreover, accuracy, precision, recall, and F1-score performance metrics are utilized to evaluate the proposed strategy and demonstrate that a balanced dataset significantly affects classification performance.

7.
Atmosphere ; 13(7):1023, 2022.
Article in English | ProQuest Central | ID: covidwho-1963692

ABSTRACT

(1) Background: To better carry out air pollution control and to assist in accurate investigations of air pollution, in this study, we fully explore the spatial distribution characteristics of air pollution complaint results and provide guidance for air pollution control by combining regional air monitoring data. (2) Methods: By selecting the air pollution complaint information in Beijing from 2019 to 2020, in this study, we extract the names and addresses of complaint points, as well as the complaint times and types, by adopting the BERT (bidirectional encoder representations from transformers) + CRF (conditional random field) deep learning model. Moreover, through further filtering and processing of the complaint points’ address information, we achieve address matching and spatial positioning of the complaint points, and realize the regional spatial representation of air pollution complaints in Beijing in the form of a heat map. (3) Results: The experimental results are compared and analyzed with the ranking data of total suspended particulate (TSP) concentration of townships (streets) in Beijing during the same period, indicating that the key areas of air pollution complaints have a high correlation with the key polluted township (street) areas. The distribution of complaints and the types of complaints in each township (street) differ according to the population density, the level of education, and economic activity in each township (street). (4) Conclusions: The results of this study show that the public, as the intuitive perceiver of air pollution, is sensitive to the air pollution situation at a smaller spatial scale; furthermore, complaints can provide guidance and reference for the direction of air pollution control and law enforcement investigations when coupled with geographical features and economic status.

8.
International Journal of Environmental Research and Public Health ; 19(9):5258, 2022.
Article in English | ProQuest Central | ID: covidwho-1837928

ABSTRACT

A successful interprofessional faculty development program was transformed into a more clinically focused professional development opportunity for both faculty and clinicians. Discipline-specific geriatric competencies and the Interprofessional Education Collaborative (IPEC) competencies were aligned to the 4Ms framework. The goal of the resulting program, Creating Interprofessional Readiness for Complex and Aging Adults (CIRCAA), was to advance an age-friendly practice using evidence-based strategies to support wellness and improve health outcomes while also addressing the social determinants of health (SDOH). An interprofessional team employed a multidimensional approach to create age-friendly, person-centered practitioners. In this mixed methods study, questionnaires were disseminated and focus groups were conducted with two cohorts of CIRCAA scholars to determine their ability to incorporate learned evidence-based strategies into their own practice environments. Themes and patterns in the transcribed interview recordings were identified by multiple coders, and inter-coder reliability was assessed. The findings indicate that participants successfully incorporated age-friendly principles and best practices into their own work environments and escaped the silos of their disciplines through the implementation of their capstone projects. Quantitative data supported qualitative themes and revealed gains in knowledge of critical components of age-friendly healthcare and perceptions of interprofessional collaborative care. These results are discussed within a new conceptual framework for studying the multidimensional complexity of what it means to be age-friendly. Our findings suggest that programs such as CIRCAA have the potential to improve older adults’ health by addressing SDOH, advancing age-friendly and patient-centered care, and promoting an interprofessional model of evidence-based practice.

9.
Electronics ; 11(8):1284, 2022.
Article in English | ProQuest Central | ID: covidwho-1809790

ABSTRACT

In Internet-of-Media-Things (IoMT) environments, users can access and view high-quality Over-the-Top (OTT) media services anytime and anywhere. As the number of OTT platform users has increased, the original content offered by such OTT platforms has become very popular, further increasing the number of users. Therefore, effective resource-management technology is essential for reducing service-operation costs by minimizing unused resources while securing the resources necessary to provide media services in a timely manner when the user’s resource-demand rates change rapidly. However, previous studies have investigated efficient cloud-resource allocation without considering the number of users after the release of popular content. This paper proposes a technology for predicting and allocating cloud resources in the form of a Long Short-Term Memory (LSTM)-based reinforcement-learning method that provides information for OTT service providers about whether users are willing to watch popular content using the Korean Bidirectional Encoder Representations from Transformers (KoBERT) model. Results of simulating the proposed technology verified that efficient resource allocation can be achieved by maintaining service quality while reducing cloud-resource waste, depending on whether content popularity is disclosed.

10.
IEEE Transactions on Instrumentation and Measurement ; 71:1-13, 2022.
Article in English | ProQuest Central | ID: covidwho-1779161

ABSTRACT

COronaVIrus Disease 2019 (COVID-19) emerged as a global pandemic in the last two years. Typical abnormal findings in chest computed tomography (CT) images of COVID-19 patients are ground-glass opacities (GGOs) and consolidation, which signify the extent of damage caused to the lungs. The manual annotation of these abnormalities for severity analysis is complex, tedious, and time-consuming. This motivated us to develop a vision-based analysis framework for automated segmentation of lung abnormalities. We propose a deep learning framework, namely LwMLA-NET “Lightweight Multi-Level Attention-based NETwork,” to segment GGO and consolidation. The LwMLA-NET is based on an encoder–decoder architecture where depth-wise separable convolutions are employed at each stage, making it a lightweight framework that significantly reduces the computational cost. Another distinguishing module in LwMLA-NET is the multilevel attention (MLA) mechanism, which focuses on dominant and relevant features and avoids the propagation of insignificant features from the encoder to the decoder, thereby aiding faster optimization. Integrating the atrous spatial pyramid pooling (ASPP) module in the bottleneck helps to handle scale variations. The LwMLA-NET was evaluated on two databases—MedSeg and Radiopedia—and obtained F1-scores of 76.7% and 73.1%, respectively. The experimental evaluation shows that LwMLA-NET outperforms other state-of-the-art deep learning frameworks like Attention U-Net, PSP-Net, Cople-Net, Inf-Net, and Mobile-Net V2 in terms of segmentation performance and has an acceptable generalization capability. Moreover, our team, “LwMLA-NET-Team-KDD-JU,” participated in Kaggle’s “COVID-19 CT Images Segmentation” open challenge. The performance was evaluated on a separate test set, and we obtained fourth rank on the leaderboard among 40 teams with an F1-score of 71.196%.
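For segmentation, the F1-score is computed over pixels and coincides with the Dice coefficient. A minimal sketch of scoring a predicted binary lesion mask against a ground-truth mask (not LwMLA-NET itself; the masks are hypothetical):

```python
# Minimal sketch: pixel-wise F1 / Dice score between two binary
# segmentation masks of equal shape.
import numpy as np

def f1_mask(pred: np.ndarray, truth: np.ndarray) -> float:
    """F1 (Dice) score: 2*|intersection| / (|pred| + |truth|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return 2.0 * intersection / denom

truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True     # a 2x2 ground-truth lesion region
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True      # prediction overshoots by one column
assert abs(f1_mask(pred, truth) - 0.8) < 1e-12   # 2*4 / (6 + 4) = 0.8
```

Averaging this score over a test set yields dataset-level figures like the 76.7% and 73.1% the abstract reports.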

11.
International Journal on Electrical Engineering and Informatics ; 13(4):801-812, 2021.
Article in English | ProQuest Central | ID: covidwho-1716210

ABSTRACT

Pre-trained word embedding models have become widely used in Natural Language Processing (NLP), but they disregard the context and sense of the text. Inspired by the great achievements of deep learning algorithms, we study in this paper the capacity of the pre-trained BERT model (Bidirectional Encoder Representations from Transformers) for the Arabic language to classify Arabic tweets using a hybrid network of two well-known models: Bidirectional Long Short-Term Memory (BiLSTM) and Gated Recurrent Unit (GRU). In this context, we fine-tuned the Arabic BERT (AraBERT) parameters and used the model on three merged datasets to impart its knowledge to Arabic sentiment analysis. To that end, we conducted experiments comparing AraBERT, on the one hand, in the word embedding phase against static pre-trained word embedding methods, namely AraVec and FastText, and, on the other hand, in the classification phase, comparing the hybrid model with the convolutional neural network (CNN), long short-term memory (LSTM), BiLSTM, and GRU, which are prevalently preferred in sentiment analysis. The results demonstrate that the fine-tuned AraBERT model, combined with the hybrid network, achieved peak performance with up to 94% accuracy.

12.
Algorithms ; 15(2):71, 2022.
Article in English | ProQuest Central | ID: covidwho-1709736

ABSTRACT

Deep learning is a type of machine learning that uses artificial neural networks to mimic the human brain, recognizing patterns and learning from them to make decisions. It uses machine learning methods such as supervised, semi-supervised, or unsupervised learning strategies to learn automatically in deep architectures and has gained much popularity due to its superior ability to learn from huge amounts of data. Deep learning approaches have been found to be successful for big data analysis. Applications include virtual assistants such as Alexa and Siri, facial recognition, personalization, natural language processing, autonomous cars, automatic handwriting generation, news aggregation, the colorization of black-and-white images, the addition of sound to silent films, pixel restoration, and deep dreaming. As a review, this paper aims to categorically cover several widely used deep learning algorithms along with their architectures and their practical applications: backpropagation, autoencoders, variational autoencoders, restricted Boltzmann machines, deep belief networks, convolutional neural networks, recurrent neural networks, generative adversarial networks, CapsNets, transformers, embeddings from language models, bidirectional encoder representations from transformers, and attention in natural language processing. In addition, challenges of deep learning are presented, such as AutoML-Zero, neural architecture search, evolutionary deep learning, and others. The pros and cons of these algorithms and their applications in healthcare are explored, alongside the future direction of this domain. This paper presents a review and a checkpoint to systemize the popular algorithms and to encourage further innovation regarding their applications.
For new researchers in the field of deep learning, this review can help them obtain many details about the advantages, disadvantages, applications, and working mechanisms of a number of deep learning algorithms. In addition, we introduce detailed information on how to apply several deep learning algorithms in healthcare, such as in relation to the COVID-19 pandemic. By presenting many challenges of deep learning in one section, we hope to increase awareness of these challenges and of how they can be dealt with. This could also motivate researchers to find solutions for these challenges.
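Among the architectures this review covers, the autoencoder is the simplest to demonstrate end to end. The following is a toy sketch (assumptions: a single tanh hidden layer, a linear decoder, plain gradient descent on synthetic data; not any specific paper's model):

```python
# Toy sketch: a one-hidden-layer autoencoder that learns to compress
# 8-dimensional inputs through a 3-dimensional bottleneck.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))          # 64 synthetic samples, 8 features

n_hidden = 3                          # bottleneck smaller than the input
W_enc = rng.normal(scale=0.1, size=(8, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 8))
lr = 0.05

def forward(X):
    code = np.tanh(X @ W_enc)         # encoder: compress to 3 dimensions
    recon = code @ W_dec              # decoder: linear reconstruction
    return code, recon

_, recon0 = forward(X)
loss0 = np.mean((recon0 - X) ** 2)    # reconstruction MSE before training

for _ in range(200):                  # plain gradient-descent loop
    code, recon = forward(X)
    grad_recon = 2 * (recon - X) / X.shape[0]
    grad_W_dec = code.T @ grad_recon
    grad_code = grad_recon @ W_dec.T
    grad_W_enc = X.T @ (grad_code * (1 - code ** 2))  # tanh' = 1 - tanh^2
    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

_, recon1 = forward(X)
loss1 = np.mean((recon1 - X) ** 2)
assert loss1 < loss0                  # training reduced reconstruction error
```

The bottleneck forces the network to learn a compressed representation; the same principle, scaled up, underlies the de-noising and representation-learning uses of autoencoders mentioned throughout these records.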
